-
This study examined the relations among strategic planning, execution, and strategy efficiency during problem solving in a digital algebra learning game with 7th-grade students. We used pre-solving pause time as a proxy indicator of strategic planning and the productivity of the initial strategy as a measure of effective strategy execution. Additionally, we explored how these variables correlated with students' posttest scores assessing algebraic knowledge. Mediation analyses at both the problem and student levels indicated that longer pre-solving pause times were associated with greater strategy efficiency. When considering both the direct and indirect effects of pre-solving pause time on strategy efficiency, the results revealed a partial positive mediation through the productivity of the initial strategy. Lastly, the results of a path analysis showed that strategy efficiency significantly and positively predicted algebraic knowledge. These findings suggest that longer pause times are associated with more efficient problem solving because they increase the likelihood of a productive initial step, highlighting a positive mediating role of execution in the relation between planning and strategy efficiency in algebraic problem solving.

Free, publicly-accessible full text available August 23, 2026
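The mediation logic described in this abstract (planning → execution → efficiency) can be sketched with a simple two-regression estimate of direct and indirect paths. This is an illustrative toy on synthetic data, not the study's analysis; the variable names and effect sizes are assumptions chosen only to mirror the constructs.

```python
import numpy as np

# Hedged sketch: estimate direct and indirect (mediated) effects with OLS.
# pause_time (X) -> initial_productivity (M) -> efficiency (Y); all synthetic.
rng = np.random.default_rng(0)
n = 500
pause_time = rng.normal(size=n)                                          # planning proxy (X)
initial_productivity = 0.5 * pause_time + rng.normal(scale=0.5, size=n)  # mediator (M)
efficiency = (0.3 * pause_time + 0.6 * initial_productivity
              + rng.normal(scale=0.5, size=n))                           # outcome (Y)

def ols_slopes(y, *xs):
    """Coefficients of y regressed on xs (intercept included, then dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(initial_productivity, pause_time)[0]                   # X -> M path
cprime, b = ols_slopes(efficiency, pause_time, initial_productivity)  # direct, M -> Y
indirect = a * b  # mediated effect; partial mediation when cprime is also nonzero
print(f"direct={cprime:.2f}, indirect={indirect:.2f}")
```

A positive `indirect` alongside a positive `cprime` corresponds to the partial positive mediation pattern the abstract reports.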
-
In optimal experimental design, the objective is to select a limited set of experiments that maximizes information about unknown model parameters based on factor levels. This work addresses the generalized D-optimal design problem, allowing for nonlinear relationships in the factor levels. We develop scalable algorithms suitable for cases where the number of candidate experiments grows exponentially with the factor dimension, focusing on both first- and second-order models under design constraints. In particular, our approach integrates convex relaxation with pricing-based local search techniques, which provide upper bounds and performance guarantees. Unlike traditional local search methods, such as the "Fedorov exchange" and its variants, our method effectively accommodates arbitrary side constraints in the design space. Furthermore, it yields both a feasible solution and an upper bound on the optimal value derived from the convex relaxation. Numerical results highlight the efficiency and scalability of our algorithms, demonstrating superior performance compared to the state-of-the-art commercial software JMP.

Free, publicly-accessible full text available August 3, 2026
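As a point of reference for the "Fedorov exchange" baseline this abstract contrasts against, the classical D-criterion and a bare exchange-style local search can be sketched as follows. This is a minimal unconstrained toy (first-order model, three factors at three levels), not the paper's relaxation-plus-pricing method; the budget `k = 6` and level set are assumptions.

```python
import numpy as np
from itertools import product

# Hedged sketch: exchange-style local search maximizing log det(X^T X),
# the D-optimality criterion, over a small candidate set.
rng = np.random.default_rng(1)

# Candidate experiments: full factorial on 3 factors at levels {-1, 0, 1};
# first-order model row = (1, x1, x2, x3).
levels = [-1.0, 0.0, 1.0]
cands = np.array([(1.0, *x) for x in product(levels, repeat=3)])

def log_det_info(rows):
    """log det of the information matrix for the chosen rows (-inf if singular)."""
    X = cands[list(rows)]
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

k = 6  # experiment budget (assumed)
design = list(rng.choice(len(cands), size=k, replace=False))
start_val = log_det_info(design)

improved = True
while improved:                           # repeat until no swap helps
    improved = False
    for i in range(k):                    # each chosen experiment...
        for j in range(len(cands)):       # ...tried against each candidate
            if j in design:
                continue
            trial = design.copy()
            trial[i] = j
            if log_det_info(trial) > log_det_info(design) + 1e-9:
                design, improved = trial, True

best = log_det_info(design)
```

Unlike the paper's approach, this search offers no upper bound or guarantee, and handling side constraints would require filtering the candidate set by hand.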
-
-
Accurate identification of inundated areas is crucial for mitigating the impacts of flooding, which causes numerous casualties and significant economic losses. While polarimetric synthetic aperture radar (PolSAR) data have been utilized to detect inundated regions, the information contained within PolSAR features remains severely underutilized. We introduce a novel approach that involves extracting a large number of PolSAR features through various PolSAR decomposition techniques, selecting the most important ones using the decision tree–recursive feature elimination (DT-RFE) method, and ultimately detecting inundation using a convolutional neural network (CNN) model. The hybrid DT-RFE–CNN model was trained and tested over a region in southeastern North Carolina during Hurricane Florence on September 18, 2018, using PolSAR features derived from the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). In terms of flood-mapping efficacy, the DT-RFE–CNN model outperformed a CNN model that used only PolSAR data across all metrics in both the training and testing stages. The performance of the trained DT-RFE–CNN model was evaluated by testing it over the same region for four more days (September 19, 20, 22, and 23, 2018); it achieved an average accuracy, precision, recall, F1 score, and intersection-over-union of 0.9304, 0.9089, 0.9584, 0.9324, and 0.8738, respectively, outperforming both the classical Otsu method and the FT-Transformer model using features selected by DT-RFE. Finally, we assessed the model's generalizability by mapping another significant flood event, caused by Hurricane Harvey in Texas between August and September 2017. Based on the results, the hybrid model can accurately detect flooding, even in regions on which it has not been trained. Thus, the proposed method can facilitate flood monitoring and response efforts.

Free, publicly-accessible full text available July 17, 2026
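The DT-RFE selection step this abstract describes — ranking a large pool of features with recursive feature elimination driven by a decision tree — can be sketched with standard scikit-learn components. Synthetic data stands in for the UAVSAR-derived PolSAR features, and the feature counts are assumptions for illustration; the downstream CNN is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier

# Hedged sketch of the DT-RFE stage only: a decision tree supplies feature
# importances, and RFE recursively drops the weakest features.
X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)  # stand-in for PolSAR features

selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=8)
selector.fit(X, y)

selected = np.flatnonzero(selector.support_)  # indices of retained features
X_reduced = X[:, selected]                    # reduced input for a downstream CNN
```

In the paper's pipeline, `X_reduced` would then be reshaped into image patches and fed to the CNN classifier.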
-
-
Large language models (LLMs) are notoriously memory-intensive during training, particularly with the popular AdamW optimizer. This memory burden necessitates using more or higher-end GPUs or reducing batch sizes, limiting training scalability and throughput. To address this, various memory-efficient optimizers have been proposed to reduce optimizer memory usage. However, they face critical challenges: (i) reliance on costly SVD operations; (ii) significant performance trade-offs compared to AdamW; and (iii) still substantial optimizer memory overhead to maintain competitive performance. In this work, we identify that AdamW's learning rate adaptation rule can be effectively coarsened as a structured learning rate update. Based on this insight, we propose Approximated Gradient Scaling for Memory-Efficient LLM Optimization (APOLLO), which approximates learning rate scaling using an auxiliary low-rank optimizer state based on pure random projection. This structured learning rate update rule makes APOLLO highly tolerant to further memory reductions while delivering comparable pre-training performance. Even its rank-1 variant, APOLLO-Mini, achieves superior pre-training performance compared to AdamW with SGD-level memory costs. Extensive experiments demonstrate that the APOLLO series performs on par with or better than AdamW, while achieving greater memory savings by nearly eliminating the optimization states of AdamW. These savings provide significant system-level benefits: (1) Enhanced Throughput: 3x throughput on an 8xA100-80GB setup compared to AdamW by supporting 4x larger batch sizes. (2) Improved Model Scalability: Pre-training LLaMA-13B with naive DDP on A100-80GB GPUs without system-level optimizations. (3) Low-End GPU Friendly Pre-training: Pre-training LLaMA-7B on a single GPU using less than 12 GB of memory with weight quantization.

Free, publicly-accessible full text available February 17, 2026
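The core idea the abstract states — keeping optimizer second-moment statistics in a low-rank space obtained by a fixed random projection, then using them only to scale the full-rank gradient — can be illustrated with a toy update. This is a loose re-implementation from the abstract's description, not the authors' code; the shapes, rank, hyperparameters, and the per-row scaling heuristic are all assumptions.

```python
import numpy as np

# Hedged sketch: AdamW-style learning-rate scaling estimated from a
# randomly projected (rank-r) gradient, so the second-moment state is
# (m x r) instead of the full (m x n).
rng = np.random.default_rng(0)
m, n, r = 64, 32, 4                        # weight shape (m x n), rank r << n
P = rng.normal(size=(n, r)) / np.sqrt(r)   # fixed random projection
v = np.zeros((m, r))                       # low-rank second-moment state
beta2, eps, lr = 0.999, 1e-8, 1e-3

def apollo_like_step(W, grad, v):
    """One update: scale the full gradient using compressed statistics."""
    g_low = grad @ P                         # project gradient to rank r
    v = beta2 * v + (1 - beta2) * g_low**2   # second moment, low-rank space
    # structured (per-row) scaling factor from the compressed statistics
    scale = 1.0 / (np.sqrt(v.mean(axis=1, keepdims=True)) + eps)
    return W - lr * scale * grad, v

W = rng.normal(size=(m, n))
grad = 2 * W                                 # gradient of f(W) = ||W||_F^2
W_new, v = apollo_like_step(W, grad, v)
```

The memory point is visible in the shapes: the state `v` holds `m * r` floats versus the `2 * m * n` that AdamW's full first and second moments would require.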
-
This innovative practice WIP paper describes a pioneering National Science Foundation-supported research project designed to address the underrepresentation of minority groups in STEM fields by meeting the educational needs of community college student parents. The Holistic Oasis for Parents' Education (HOPE) Program at Hostos Community College centers on a dual-enrollment model, wherein community college parenting students (HOPE Scholars) pursue their academic goals by taking STEM courses during the summer while their children, ranging from kindergarten through fifth grade, are engaged in enriching hands-on STEM activities on the college campus. By aligning the educational pursuits of parents and children, this program supports the academic advancement of adult learners and fosters a positive learning environment for the next generation. The Program's holistic approach empowers HOPE Scholars to accumulate summer credits in STEM courses and nurtures the STEM talent pipeline by inspiring the younger generation.
